
    Using spatio-temporal continuity constraints to enhance visual tracking of moving objects

    We present a framework for annotating dynamic scenes involving occlusion and other uncertainties. Our system comprises an object tracker, an object classifier and an algorithm for reasoning about spatio-temporal continuity. The principle behind the object tracker and classifier modules is to reduce error by increasing ambiguity: objects in close proximity are merged and multiple hypotheses are presented. The reasoning engine then resolves error, ambiguity and occlusion to produce the most likely hypothesis consistent with global spatio-temporal continuity constraints. The system produces improved annotation over frame-by-frame methods; it has been implemented and applied to the analysis of a team sports video.
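The global continuity constraint described above can be sketched in a few lines (a minimal illustration, not the authors' implementation; the displacement bound `max_step` and the hypothesis scoring are assumptions):

```python
# Sketch: accept only object-track hypotheses whose inter-frame motion is
# physically plausible, then keep the best-scoring consistent one. This is
# the kind of global check that lets a reasoner reject frame-by-frame
# labellings that "teleport" an object across the scene.

def continuous(track, max_step):
    """True if no displacement between consecutive frames exceeds max_step."""
    return all(
        ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5 <= max_step
        for (x1, y1), (x2, y2) in zip(track, track[1:])
    )

def most_likely(hypotheses, max_step):
    """From (score, track) pairs, return the best continuity-consistent one."""
    feasible = [(score, t) for score, t in hypotheses if continuous(t, max_step)]
    return max(feasible, default=None)
```

For example, a track that jumps 50 pixels between frames is discarded when `max_step` is 10, even if its frame-by-frame detection score is higher than that of a smooth alternative.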

    Stain guided mean-shift filtering in automatic detection of human tissue nuclei

    Background: As a critical technique in the digital pathology laboratory, automatic nuclear detection has been investigated for more than a decade. Conventional methods work directly on the raw images, whose color/intensity homogeneity within tissue/cell areas is undermined by artefacts such as uneven staining, making the subsequent binarization step prone to error. This paper concerns detecting cell nuclei automatically in digital pathology images by enhancing color homogeneity as a pre-processing step. Methods: Unlike previous watershed-based algorithms that rely on post-processing of the watershed output, we present a new method that incorporates the staining information of pathological slides in the analysis. This pre-processing step strengthens the color homogeneity within both the nuclear and the background areas, while keeping the nuclear edges sharp. A proof of convergence for the proposed algorithm is also provided. After pre-processing, Otsu's threshold is applied to binarize the image, which is further segmented via watershed. To strike a compromise between splitting overlapping nuclei and avoiding over-segmentation, a naive Bayes classifier is designed to refine the splits suggested by the watershed segmentation. Results: The method is validated with 10 sets of 1000 × 1000 pathology images of lymphoma from one digital slide. The mean precision and recall rates are 87% and 91%, corresponding to a mean F-score of 89%; the standard deviations of these indicators are 5.1%, 1.6% and 3.2% respectively. Conclusion: The precision/recall performance obtained indicates that the proposed method outperforms several alternatives. In particular, for nuclear detection, stain guided mean-shift (SGMS) is more effective in pre-processing than the direct application of mean-shift. Our experiments also show that pre-processing digital pathology images with SGMS gives better results than conventional watershed algorithms. Nevertheless, as only one type of tissue is tested in this paper, a further study is planned to enhance the robustness of the algorithm so that other types of tissues/stains can also be processed reliably.
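The binarization step named in the pipeline, Otsu's threshold, is standard and easy to reproduce (a NumPy sketch of the classic method, not the paper's code; the surrounding SGMS pre-processing and watershed stages are omitted):

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: choose the grey level that maximizes the
    between-class variance of the two resulting pixel classes."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()                 # grey-level probabilities
    omega = np.cumsum(p)                  # class-0 (background) probability
    mu = np.cumsum(p * np.arange(256))    # cumulative mean
    mu_t = mu[-1]                         # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    sigma_b[~np.isfinite(sigma_b)] = 0.0  # edges where a class is empty
    return int(np.argmax(sigma_b))
```

On a strongly bimodal image (which is exactly what the SGMS pre-processing aims to produce) the returned level cleanly separates the two modes, and `gray > t` gives the binary mask passed to the watershed.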

    A 3D Primary Vessel Reconstruction Framework with Serial Microscopy Images

    Three-dimensional microscopy images present significant potential to enhance biomedical studies. This paper presents an automated method for quantitative analysis of 3D primary vessel structures in histology whole slide images. With registered microscopy images, we identify primary vessels in each 2D slide with an improved variational level set framework. We propose a Vessel Directed Fitting Energy (VDFE) to provide prior information on vessel wall probability in an energy minimization paradigm. We then find the optimal vessel cross-section associations along the image sequence with a two-stage procedure. Vessel mappings are first found between each pair of adjacent slides with a similarity function covering four association cases. These bi-slide vessel components are further linked by Bayesian Maximum A Posteriori (MAP) estimation, where the posterior probability is modeled as a Markov chain. The efficacy of the proposed method is demonstrated with 54 whole slide microscopy images of sequential sections from a human liver.
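When the posterior over a chain of linking decisions factorizes as a Markov chain, the MAP sequence can be found by dynamic programming. The following is a generic Viterbi-style sketch of that idea (an illustration of the technique, not the paper's formulation; the emission/transition tables are assumptions):

```python
import math

def map_chain(emission, transition):
    """MAP state sequence of a Markov chain via dynamic programming.

    emission[t][s]    : likelihood of the observation at step t given state s
    transition[s][s2] : probability that state s2 follows state s
    Computation is in the log domain for numerical stability.
    """
    n = len(emission[0])
    logp = [math.log(emission[0][s]) for s in range(n)]
    back = []                                   # backpointers per step
    for t in range(1, len(emission)):
        new, ptr = [], []
        for s2 in range(n):
            best = max(range(n),
                       key=lambda s: logp[s] + math.log(transition[s][s2]))
            new.append(logp[best] + math.log(transition[best][s2])
                       + math.log(emission[t][s2]))
            ptr.append(best)
        logp, back = new, back + [ptr]
    state = max(range(n), key=lambda s: logp[s])
    path = [state]
    for ptr in reversed(back):                  # trace back the optimum
        state = ptr[state]
        path.append(state)
    return path[::-1]
```

In the paper's setting a "state" would be a candidate bi-slide vessel association and the transition term would encode cross-slide consistency; here the tables are left abstract.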

    Liver whole slide image analysis for 3D vessel reconstruction

    The emergence of digital pathology has enabled numerous quantitative analyses of histopathology structures. However, most pathology image analyses are limited to two-dimensional datasets, resulting in substantial information loss and incomplete interpretation. To address this, we have developed a complete framework for three-dimensional whole slide image analysis and demonstrated its efficacy on 3D vessel structure analysis with liver tissue sections. The proposed workflow includes components for image registration, vessel segmentation, vessel cross-section association, object interpolation, and volumetric rendering. For 3D vessel reconstruction, a cost function is formulated from shape descriptors, spatial similarity and trajectory smoothness, taking into account four vessel association scenarios. An efficient entropy-based Relaxed Integer Programming (eRIP) method is proposed to identify the optimal inter-frame vessel associations. The reconstructed 3D vessels are validated both quantitatively and qualitatively. Evaluation results demonstrate the high efficiency and accuracy of the proposed method, suggesting its promise for supporting further 3D vessel analysis with whole slide images.
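A cost of the general shape described, combining a shape-descriptor term, a spatial term and a trajectory-smoothness term, might look like this (a hypothetical form with illustrative weights; the paper's actual descriptors and weights are not given here):

```python
def association_cost(shape_a, shape_b, pos_a, pos_b, pos_prev,
                     w=(1.0, 1.0, 0.5)):
    """Toy per-pair association cost between a vessel cross-section on
    slide k (at pos_a, preceded by pos_prev on slide k-1) and a candidate
    on slide k+1 (at pos_b). Lower is better. Weights w are illustrative."""
    # shape term: L1 distance between shape-descriptor vectors
    shape_term = sum(abs(a - b) for a, b in zip(shape_a, shape_b))
    # spatial term: Euclidean distance between centroids
    spatial_term = sum((a - b) ** 2 for a, b in zip(pos_a, pos_b)) ** 0.5
    # smoothness term: penalize deviation from the previous displacement
    prev_step = tuple(a - p for a, p in zip(pos_a, pos_prev))
    new_step = tuple(b - a for a, b in zip(pos_a, pos_b))
    smooth_term = sum((n - p) ** 2
                      for n, p in zip(new_step, prev_step)) ** 0.5
    ws, wd, wm = w
    return ws * shape_term + wd * spatial_term + wm * smooth_term
```

A candidate that continues the vessel's existing trajectory in a straight line scores lower (better) than one that forces a sharp sideways jump, which is the intended effect of the smoothness term.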

    A framework for 3D vessel analysis using whole slide images of liver tissue sections

    Three-dimensional (3D) high-resolution microscopic images have high potential for improving the understanding of both normal and disease processes where structural changes or the spatial relationship of disease features are significant. In this paper, we develop a complete framework for 3D pathology analytical imaging, applied to whole slide images of sequential liver slices for 3D vessel structure analysis. The analysis workflow consists of image registration, segmentation, vessel cross-section association, interpolation, and volumetric rendering. To identify biologically meaningful correspondences across adjacent slides, we formulate a similarity function for four association cases. The optimal solution is then obtained by constrained Integer Programming. We compare our vessel reconstruction results with human annotations quantitatively and qualitatively. Validation results indicate satisfactory concordance as measured by both region-based and distance-based metrics. These results demonstrate a promising 3D vessel analysis framework for whole slide images of liver tissue sections.
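The core of the cross-section association step is a one-to-one matching that maximizes total similarity, which an Integer Program solves at scale. For a handful of vessels per slide the same optimum can be found by exhaustive enumeration (a brute-force sketch with a made-up similarity matrix, not the paper's solver):

```python
from itertools import permutations

def best_association(sim):
    """Optimal one-to-one matching between the vessels on slide k (rows)
    and slide k+1 (columns), maximizing the summed similarity sim[i][j].
    Exhaustive over all bijections -- only viable for small n, but it
    returns exactly what a constrained Integer Program would."""
    n = len(sim)
    best_score, best_perm = float("-inf"), None
    for perm in permutations(range(n)):
        total = sum(sim[i][perm[i]] for i in range(n))
        if total > best_score:
            best_score, best_perm = total, perm
    return list(best_perm), best_score
```

This ignores the paper's four association cases (one-to-one, splits, merges, appearances/disappearances); handling those is precisely why the authors need the richer IP formulation.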

    Learning the Repair Urgency for a Decision Support System for Tunnel Maintenance

    The transport network in many countries relies on extended sections which run underground in tunnels. As tunnels age, repairs are required to prevent dangerous collapses. However, repairs are expensive and affect the operational efficiency of the tunnel. We present a decision support system (DSS), based on supervised machine learning methods, that learns to predict the risk factor and the resulting repair urgency in the tunnel maintenance planning of a European national rail operator. The data on which the prototype has been built covers 47 tunnels of varying lengths. For each tunnel, periodic survey inspection data is available for multiple years, as well as other data such as the tunnel's method of construction. Expert annotations of the degree of repair urgency are also available for each 10 m tunnel segment in each survey; these are used for both training and model evaluation. We show that good predictive power can be obtained and discuss the relative merits of a number of learning methods.
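The supervised set-up, per-segment feature vectors with expert urgency labels, can be illustrated with any simple classifier; a k-nearest-neighbours stand-in is shown below (the feature names and data are invented for illustration; the paper compares several learning methods, not specifically k-NN):

```python
def knn_predict(train, query, k=3):
    """Tiny k-NN classifier as a stand-in for the DSS's learner.
    train : list of (feature_vector, urgency_label) pairs, one per
            10 m tunnel segment (features here are hypothetical, e.g.
            crack density, water-ingress score, tunnel age).
    query : feature vector of a new segment to assess."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    neighbours = sorted(train, key=lambda pair: dist(pair[0], query))[:k]
    labels = [label for _, label in neighbours]
    return max(set(labels), key=labels.count)   # majority vote
```

The point of the sketch is the data shape: each annotated survey of each segment becomes one labelled training example, so a single tunnel contributes many examples across years.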

    3D mapping from partial observations: An application to utility mapping

    Precise mapping of buried utilities is critical to managing massive urban underground infrastructure and preventing utility incidents. Most current research focuses on generating such maps from complete information about underground utilities. In real-world practice, however, a full picture of the buried utilities is rarely available. This paper therefore explores the problem of generating maps from partial observations of a scene, where the actual world is not fully observed. In particular, we focus on generating 2D/3D maps of buried utilities using a probability-based approach. This has the advantage that the method is generic and can be applied to various sources of utility detections, e.g. manhole observations, sensors, and existing records. In this paper, we illustrate our methods on detections from manhole observations and sensor measurements. The paper makes the following new contributions. It is the first time that partial observations have been used to generate utility maps with optimization-based approaches. It is the first time that such a large variety of utility properties has been considered, including location, direction, type and size. Another novel contribution is that different kinds of connections are included to reflect the complex layout and structure of buried utilities. Finally, for the first time to the best of our knowledge, we integrate utility detection, probability calculation, model formulation and map generation into a single framework. The proposed framework represents all detections in a common language of probability distributions and then formulates the mapping problem as an Integer Linear Programming (ILP) problem; the final map is generated from the solution with the highest probability sum. The effectiveness of the system is evaluated on synthetic and real data using appropriate evaluation metrics.
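The "common language of probability distributions" idea can be made concrete in one dimension: model each detection as a Gaussian over the pipe's offset and score each candidate map by its total log-likelihood (a toy sketch of the representation only; the paper's full method selects connections via ILP, not by enumerating candidates):

```python
import math

def gauss_logpdf(x, mu, sigma):
    """Log-density of a 1-D Gaussian N(mu, sigma^2) at x."""
    return (-0.5 * ((x - mu) / sigma) ** 2
            - math.log(sigma * math.sqrt(2 * math.pi)))

def score_hypothesis(pipe_y, detections):
    """Total log-likelihood of a straight pipe at offset pipe_y given
    detections, each a (observed_offset, sigma) pair -- e.g. a precise
    manhole observation (small sigma) or a noisy sensor trace (large sigma)."""
    return sum(gauss_logpdf(pipe_y, mu, sigma) for mu, sigma in detections)

def best_pipe(candidates, detections):
    """Pick the candidate offset with the highest probability sum."""
    return max(candidates, key=lambda y: score_hypothesis(y, detections))
```

Because every detection source is reduced to a distribution plus an uncertainty, precise manhole records automatically outweigh noisy sensor readings in the final score, without any source-specific logic.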

    3D reconstruction of multiple stained histology images.

    Three-dimensional (3D) tissue reconstruction from histology images with different stains allows the spatial alignment of structural and functional elements highlighted by those stains, for the quantitative study of many physiological and pathological phenomena. This has significant potential to improve the understanding of growth patterns and the spatial arrangement of diseased cells, and to enhance the study of the biomechanical behavior of tissue structures towards better treatments (e.g. tissue-engineering applications).

    Automated detection and delineation of lymph nodes in haematoxylin & eosin stained digitised slides.

    Treatment of patients with oesophageal and gastric cancer (OeGC) is guided by disease stage, patient performance status and preferences. Lymph node (LN) status is one of the strongest prognostic factors for OeGC patients. However, survival varies between patients with the same disease stage and LN status. We recently showed that LN size in patients with OeGC might also have prognostic value, making delineation of LNs essential for size estimation and the extraction of other imaging biomarkers. We hypothesized that a machine learning workflow is able to: (1) find digital H&E stained slides containing LNs, (2) create a scoring system providing degrees of certainty for the results, and (3) delineate LNs in those images. To train and validate the pipeline, we used 1695 H&E slides from the OE02 trial, divided into training (80%) and validation (20%) sets. The model was tested on an external dataset of 826 H&E slides from the OE05 trial. A U-Net architecture was used to generate prediction maps, from which predefined features were extracted. These features were subsequently used to train an XGBoost model to determine whether a region truly contained a LN. With our method, the balanced accuracy of LN detection was 0.93 on the validation dataset (0.83 on the test dataset), compared to 0.81 (0.81) when using the standard method of thresholding U-Net predictions to obtain a binary mask. Our method allowed for the creation of an "uncertain" category and partly limited false-positive predictions on the external dataset. The mean Dice score was 0.73 (0.60) per image and 0.66 (0.48) per LN for the validation (test) datasets. Our pipeline detects images with LNs more accurately than conventional methods, and high-throughput delineation of LNs can facilitate future LN content analyses of large datasets.
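The advantage over plain thresholding comes from extracting features per candidate region and making a three-way decision. A minimal sketch of that pattern is below (the features, weights and cut-offs are invented for illustration; the paper uses a trained XGBoost model, not this hand-set rule):

```python
import numpy as np

def region_features(prob_map):
    """Hypothetical features summarizing a U-Net probability map over one
    candidate region: mean probability, peak probability, and the fraction
    of pixels with high confidence."""
    return {
        "mean_p": float(prob_map.mean()),
        "max_p": float(prob_map.max()),
        "frac_hi": float((prob_map > 0.8).mean()),
    }

def classify_region(feats, lo=0.3, hi=0.7):
    """Three-way decision standing in for the trained classifier:
    confident LN / uncertain / confident non-LN. Thresholds are illustrative."""
    score = 0.5 * feats["mean_p"] + 0.5 * feats["frac_hi"]
    if score >= hi:
        return "LN"
    if score <= lo:
        return "not_LN"
    return "uncertain"
```

A region of mixed high and low probabilities, which a hard 0.5 threshold would force into one class, lands in the "uncertain" bucket here, mirroring how the paper's scoring system flags ambiguous regions instead of emitting false positives.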